25 research outputs found
Harmonic Imaging Using a Mechanical Sector, B-Mode
An ultrasound imaging system transmits ultrasound waves into the human body, collects the reflections, processes them, and displays them on a computer screen as a grayscale image. The standard approach to ultrasound imaging is to form images from the fundamental frequency of the reflected signal. However, it has been shown that images generated from the harmonic content have improved resolution and reduced noise, resulting in clearer images. Although harmonic imaging has been shown to produce improved images, it had not previously been demonstrated with a B-mode, mechanical sector ultrasound system. In this thesis, we demonstrate such a system. We first discuss the theory of harmonic imaging, then describe the ultrasound system used, and finally present experimental results.
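The core signal-processing step the abstract describes, forming an image from the harmonic content rather than the fundamental, amounts to isolating the band around twice the transmit frequency in each received RF line. A minimal sketch of that filtering step, with assumed (not stated in the abstract) sampling rate, transmit frequency, and bandwidth:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic received RF line: fundamental at f0 plus a weaker second
# harmonic, standing in for the harmonic content generated by nonlinear
# propagation. All numeric values here are illustrative assumptions.
fs = 40e6   # sampling rate, Hz (assumed)
f0 = 3.5e6  # transmit (fundamental) frequency, Hz (assumed)
t = np.arange(0, 20e-6, 1 / fs)
rf = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

def harmonic_band(signal, fs, f0, bw=1e6):
    """Band-pass the RF line around the second harmonic (2*f0)."""
    b, a = butter(4, [2 * f0 - bw, 2 * f0 + bw], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

harm = harmonic_band(rf, fs, f0)

# After filtering, the dominant frequency of the line sits near 2*f0,
# so envelope detection on `harm` would yield a harmonic image line.
spectrum = np.abs(np.fft.rfft(harm))
peak_hz = np.fft.rfftfreq(len(harm), 1 / fs)[np.argmax(spectrum)]
```

A real system would repeat this per scan line before envelope detection and scan conversion; the filter order and bandwidth here are placeholder choices.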
Combining crowd worker, algorithm, and expert efforts to find boundaries of objects in images
While traditional approaches to image analysis have typically relied upon either manual annotation by experts or purely algorithmic approaches, the rise of crowdsourcing now provides a new source of human labor to create training data or perform computations at run-time. Given this richer design space, how should we utilize algorithms, crowds, and experts to better annotate images? To answer this question for the important task of finding the boundaries of objects or regions in images, I focus on image segmentation, an important precursor to solving a variety of fundamental image analysis problems, including recognition, classification, tracking, registration, retrieval, and 3D visualization. The first part of the work provides a detailed analysis of the relative strengths and weaknesses of three different approaches to demarcating object boundaries in images: by experts, by crowdsourced laymen, and by automated computer vision algorithms. The second part of the work describes three hybrid system designs that integrate computer vision algorithms and crowdsourced laymen to demarcate boundaries in images. Experiments revealed that the hybrid system designs yielded more accurate results than relying on algorithms or crowd workers alone and could yield segmentations indistinguishable from those created by biomedical experts. To encourage a community-wide effort to continue developing methods and systems for image-based studies with real, measurable benefits to society at large, the datasets and code are publicly shared (http://www.cs.bu.edu/~betke/BiomedicalImageSegmentation/).
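Comparing crowd, algorithm, and expert boundaries, as the abstract describes, requires an agreement metric between segmentation masks. A common choice is intersection-over-union (IoU); the sketch below is an illustrative example of such a comparison, not the specific metric or data used in the thesis:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Toy 8x8 masks: a hypothetical "expert" annotation and a "crowd"
# annotation that overshoots the object boundary by one column.
expert = np.zeros((8, 8), dtype=bool)
expert[2:6, 2:6] = True
crowd = np.zeros((8, 8), dtype=bool)
crowd[2:6, 2:7] = True

score = iou(expert, crowd)  # 16 overlapping pixels / 20 in the union = 0.8
```

A hybrid pipeline can then be judged by whether its masks score as highly against expert annotations as a second expert would.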
Salient Object Detection for Images Taken by People With Vision Impairments
Salient object detection is the task of producing a binary mask for an image that indicates which pixels belong to the foreground object versus the background. We introduce a new salient object detection dataset using images taken by people who are visually impaired and were seeking to better understand their surroundings, which we call VizWiz-SalientObject. Compared to seven existing datasets, VizWiz-SalientObject is the largest (i.e., 32,000 human-annotated images) and has unique characteristics, including a higher prevalence of text in the salient objects (i.e., in 68% of images) and salient objects that occupy a larger fraction of the image (i.e., on average, 50% coverage). We benchmarked seven modern salient object detection methods on our dataset and found they struggle most with images featuring salient objects that are large, have less complex boundaries, and lack text, as well as with lower-quality images. We invite the broader community to work on our new dataset challenge, which we publicly share at https://vizwiz.org/tasks-and-datasets/salient-object.
Comment: Computer Vision and Pattern Recognition
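The binary mask described above is typically obtained by thresholding a real-valued saliency map, and the per-image coverage statistic the abstract reports is simply the fraction of foreground pixels in that mask. A minimal sketch with an assumed fixed threshold (real methods often use adaptive thresholds):

```python
import numpy as np

def binarize(saliency, thresh=0.5):
    """Threshold a real-valued saliency map into a binary foreground mask."""
    return saliency >= thresh

# Toy 4x4 saliency map with a bright 2x2 object in one corner.
sal = np.zeros((4, 4))
sal[:2, :2] = 0.9

mask = binarize(sal)
coverage = mask.mean()  # fraction of pixels labeled foreground: 0.25
```

Averaging this coverage over a dataset gives the kind of per-dataset statistic (e.g., ~50% for VizWiz-SalientObject) reported above.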